Robust Heterogeneous Graph Neural Networks against Adversarial Attacks

Authors

Abstract

Heterogeneous Graph Neural Networks (HGNNs) have drawn increasing attention in recent years and achieved outstanding performance in many tasks. However, despite their wide use, there is currently no understanding of their robustness to adversarial attacks. In this work, we first systematically study HGNNs and show that they can be easily fooled by adding an adversarial edge between the target node and a large-degree node (i.e., hub). Furthermore, we identify two key reasons for such vulnerability of HGNNs: one is the perturbation enlargement effect, i.e., HGNNs, failing to encode the transiting probability, will enlarge the effect of the hub in comparison with GCNs; the other is the soft attention mechanism, which assigns positive attention values even to obviously unreliable neighbors. Based on these facts, we propose a novel robust HGNN framework, RoHe, against topology adversarial attacks by equipping HGNNs with an attention purifier, which can prune malicious neighbors based on topology and feature. Specifically, to eliminate the perturbation enlargement, we introduce the metapath-based transiting probability as the prior criterion of the purifier, restraining the confidence of malicious neighbors coming from the hub. The purifier then learns to mask out neighbors with low confidence, thus effectively alleviating the negative effect of malicious neighbors in the soft attention mechanism. Extensive experiments on different benchmark datasets with multiple HGNNs are conducted, where the considerable improvement of HGNNs under adversarial attacks demonstrates the effectiveness and generalization ability of our defense framework.
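The purifier described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's exact formulation: the function name, the elementwise product of attention and prior, and the hard top-k masking rule are all illustrative choices.

```python
import numpy as np

def purify_attention(att, trans_prob, k=2):
    """Toy sketch of an attention purifier (illustrative, not RoHe's exact math).

    att        : raw attention scores over a target node's neighbors
    trans_prob : metapath-based transiting probabilities used as a prior;
                 edges to high-degree hubs receive low prior values
    k          : number of neighbors to keep (top-k by confidence)
    """
    # Confidence = attention modulated by the transition-probability prior,
    # so attackers' edges to hubs are down-weighted despite high raw attention.
    confidence = att * trans_prob
    # Hard mask: keep only the k most confident neighbors.
    keep = np.argsort(confidence)[-k:]
    mask = np.full_like(att, -np.inf)
    mask[keep] = 0.0
    # Softmax over the purified scores; masked neighbors get exactly zero weight.
    scores = att + mask
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()
```

For example, a neighbor with the highest raw attention but a near-zero transition prior (a likely hub edge) is masked out, and the remaining attention weights are renormalized.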



Related Articles

Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Machine learning systems based on deep neural networks, being able to produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they are shown to be vulnerable to adversarial example attack, which generates malicious output by adding slight perturbations to the input. Previous adversarial example crafting methods, however, use s...


Robust Convolutional Neural Networks under Adversarial Noise

Recent studies have shown that Convolutional Neural Networks (CNNs) are vulnerable to a small perturbation of input called “adversarial examples”. In this work, we propose a new feedforward CNN that improves robustness in the presence of adversarial noise. Our model uses stochastic additive noise added to the input image and to the CNN models. The proposed model operates in conjunction with a C...
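The noise-injection idea summarized above can be sketched as averaging predictions over several noisy copies of the input. This is a hedged illustration only: `noisy_forward`, its parameters, and the use of Gaussian noise at inference time are assumptions for the sketch, not the cited paper's exact model.

```python
import numpy as np

def noisy_forward(model, x, sigma=0.1, n_samples=10, rng=None):
    """Illustrative inference with stochastic additive input noise.

    model : any callable mapping an input array to class scores
    x     : input (e.g., an image as a numpy array)
    sigma : standard deviation of the Gaussian noise
    """
    rng = np.random.default_rng(rng)
    # Average predictions over several noisy copies of the input,
    # smoothing out small adversarial perturbations.
    outs = [model(x + rng.normal(0.0, sigma, size=x.shape))
            for _ in range(n_samples)]
    return np.mean(outs, axis=0)
```

The design intuition is that a small adversarial perturbation is drowned out by the injected noise, so the averaged prediction is more stable than a single clean forward pass.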


Stability in Heterogeneous Multimedia Networks under Adversarial Attacks

A distinguishing feature of today's large-scale platforms for multimedia distribution and communication, such as the Internet, is their heterogeneity, predominantly manifested by the fact that a variety of communication protocols are simultaneously running over different hosts. A fundamental question that naturally arises for such common settings of heterogeneous multimedia systems concerns the...


MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples

MagNet and “Efficient Defenses...” were recently proposed as a defense to adversarial examples. We find that we can construct adversarial examples that defeat these defenses with only a slight increase in distortion.


Robust Deep Reinforcement Learning with Adversarial Attacks

This paper proposes adversarial attacks for Reinforcement Learning (RL) and then improves the robustness of Deep Reinforcement Learning algorithms (DRL) to parameter uncertainties with the help of these attacks. We show that even a naively engineered attack successfully degrades the performance of DRL algorithm. We further improve the attack using gradient information of an engineered loss func...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i4.20357